Professional Cloud Developer v1.0


You have an application in production. It is deployed on Compute Engine virtual machine instances controlled by a managed instance group. Traffic is routed to the instances through an HTTP(S) load balancer. Your users are unable to access your application. You want to implement a monitoring technique to alert you when the application is unavailable.
Which technique should you choose?

  • A. Smoke tests
  • B. Stackdriver uptime checks
  • C. Cloud Load Balancing - health checks
  • D. Managed instance group - health checks


Answer : B

Reference:
https://medium.com/google-cloud/stackdriver-monitoring-automation-part-3-uptime-checks-476b8507f59c
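
For illustration, a minimal sketch of option B using the google-cloud-monitoring Python client to create an uptime check against the load balancer's public hostname. The project ID, host, path, and display name are placeholders, and exact field handling may vary by client-library version.

    from google.cloud import monitoring_v3

    client = monitoring_v3.UptimeCheckServiceClient()

    config = monitoring_v3.UptimeCheckConfig(
        display_name="app-availability",
        monitored_resource={"type": "uptime_url",
                            "labels": {"host": "app.example.com"}},  # placeholder host
        http_check={"path": "/healthz", "port": 443, "use_ssl": True},
        period={"seconds": 60},    # run the check every minute
        timeout={"seconds": 10},
    )

    client.create_uptime_check_config(
        request={"parent": "projects/my-project",   # placeholder project
                 "uptime_check_config": config}
    )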

You are load testing your server application. During the first 30 seconds, you observe that a previously inactive Cloud Storage bucket is now servicing 2000 write requests per second and 7500 read requests per second. Your application is now receiving intermittent 5xx and 429 HTTP responses from the Cloud Storage JSON API as the demand escalates. You want to decrease the failed responses from the Cloud Storage API.
What should you do?

  • A. Distribute the uploads across a large number of individual storage buckets.
  • B. Use the XML API instead of the JSON API for interfacing with Cloud Storage.
  • C. Pass the HTTP response codes back to clients that are invoking the uploads from your application.
  • D. Limit the upload rate from your application clients so that the dormant bucket's peak request rate is reached more gradually.


Answer : D

Reference:
https://cloud.google.com/storage/docs/request-rate
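
The referenced request-rate guidance is to start below the per-bucket baseline and roughly double the request rate every 20 minutes, retrying transient errors with exponential backoff. A rough client-side sketch of that pacing follows; the bucket name, object naming, starting rate, and payloads are all assumptions.

    import time

    from google.api_core import exceptions, retry
    from google.cloud import storage

    client = storage.Client()
    bucket = client.bucket("my-upload-bucket")  # placeholder bucket name

    # Retry transient 429/503 responses with exponential backoff.
    transient = retry.Retry(
        predicate=retry.if_exception_type(
            exceptions.TooManyRequests, exceptions.ServiceUnavailable
        ),
        initial=1.0, maximum=60.0, multiplier=2.0,
    )

    START = time.monotonic()
    BASE_RATE = 500  # requests/sec to start with, well under the per-bucket baseline

    def current_limit() -> float:
        # Roughly double the allowed rate every 20 minutes of sustained traffic.
        doublings = int((time.monotonic() - START) // (20 * 60))
        return BASE_RATE * (2 ** doublings)

    for i in range(100_000):
        data = f"payload-{i}".encode()  # placeholder payload
        bucket.blob(f"uploads/object-{i}").upload_from_string(data, retry=transient)
        time.sleep(1.0 / current_limit())  # crude client-side pacing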

Your application is controlled by a managed instance group. You want to share a large read-only data set between all the instances in the managed instance group. You want to ensure that each instance can start quickly and can access the data set via its filesystem with very low latency. You also want to minimize the total cost of the solution.
What should you do?

  • A. Move the data to a Cloud Storage bucket, and mount the bucket on the filesystem using Cloud Storage FUSE.
  • B. Move the data to a Cloud Storage bucket, and copy the data to the boot disk of the instance via a startup script.
  • C. Move the data to a Compute Engine persistent disk, and attach the disk in read-only mode to multiple Compute Engine virtual machine instances.
  • D. Move the data to a Compute Engine persistent disk, take a snapshot, create multiple disks from the snapshot, and attach each disk to its own instance.


Answer : C
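
A sketch of option C using the Compute Engine API Python client to attach one shared disk in read-only mode to several instances; for a managed instance group this would normally be configured once in the instance template. Project, zone, disk, and instance names are placeholders.

    import googleapiclient.discovery

    compute = googleapiclient.discovery.build("compute", "v1")
    PROJECT, ZONE = "my-project", "us-central1-a"  # placeholders
    DISK = f"projects/{PROJECT}/zones/{ZONE}/disks/shared-dataset"

    for instance in ("app-instance-1", "app-instance-2", "app-instance-3"):
        compute.instances().attachDisk(
            project=PROJECT,
            zone=ZONE,
            instance=instance,
            body={"source": DISK, "mode": "READ_ONLY",
                  "deviceName": "shared-dataset", "autoDelete": False},
        ).execute()
    # Each VM then mounts the device read-only, e.g.:
    #   mount -o ro /dev/disk/by-id/google-shared-dataset /mnt/dataset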

You are developing an HTTP API hosted on a Compute Engine virtual machine instance that needs to be invoked by multiple clients within the same Virtual Private Cloud (VPC). You want clients to be able to get the IP address of the service.
What should you do?

  • A. Reserve a static external IP address and assign it to an HTTP(S) load balancing service's forwarding rule. Clients should use this IP address to connect to the service.
  • B. Reserve a static external IP address and assign it to an HTTP(S) load balancing service's forwarding rule. Then, define an A record in Cloud DNS. Clients should use the name of the A record to connect to the service.
  • C. Ensure that clients use Compute Engine internal DNS by connecting to the instance name with the url https://[INSTANCE_NAME].[ZONE].c.[PROJECT_ID].internal/.
  • D. Ensure that clients use Compute Engine internal DNS by connecting to the instance name with the url https://[API_NAME]/[API_VERSION]/.


Answer : C
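
A minimal sketch of a client in the same VPC calling the API through the Compute Engine internal DNS name. The instance name, zone, project, port, and path are placeholders; the example assumes plain HTTP inside the VPC.

    import requests

    INSTANCE, ZONE, PROJECT = "api-server-1", "us-central1-a", "my-project"  # placeholders
    host = f"{INSTANCE}.{ZONE}.c.{PROJECT}.internal"

    resp = requests.get(f"http://{host}:8080/v1/status", timeout=5)
    resp.raise_for_status()
    print(resp.json())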

Your application is logging to Stackdriver. You want to get the count of all requests on all /api/alpha/* endpoints.
What should you do?

  • A. Add a Stackdriver counter metric for path:/api/alpha/.
  • B. Add a Stackdriver counter metric for endpoint:/api/alpha/*.
  • C. Export the logs to Cloud Storage and count lines matching /api/alpha.
  • D. Export the logs to Cloud Pub/Sub and count lines matching /api/alpha.


Answer : A
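
A sketch of option A using the google-cloud-logging Python client to define a logs-based counter metric. The metric name is a placeholder, and the exact filter field depends on how the application writes its request logs.

    from google.cloud import logging

    client = logging.Client()
    metric = client.metric(
        "api_alpha_request_count",  # placeholder metric name
        filter_='httpRequest.requestUrl:"/api/alpha/"',
        description="Count of requests to /api/alpha/* endpoints",
    )
    if not metric.exists():
        metric.create()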

You want to re-architect a monolithic application so that it follows a microservices model. You want to accomplish this efficiently while minimizing the impact of this change to the business.
Which approach should you take?

  • A. Deploy the application to Compute Engine and turn on autoscaling.
  • B. Replace the application's features with appropriate microservices in phases.
  • C. Refactor the monolithic application with appropriate microservices in a single effort and deploy it.
  • D. Build a new application with the appropriate microservices separate from the monolith and replace it when it is complete.


Answer : B

Reference:
https://cloud.google.com/solutions/migrating-a-monolithic-app-to-microservices-gke
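
One common way to carry out option B is a "strangler" routing facade: traffic for features that have already been extracted goes to the new microservice, while everything else still goes to the monolith. A rough Flask sketch follows; the backend URLs, the /reviews feature, and the port are assumptions.

    import requests
    from flask import Flask, Response, request

    app = Flask(__name__)
    MONOLITH = "http://monolith.internal:8080"          # assumed backend URLs
    REVIEWS_SERVICE = "http://reviews.internal:8080"    # first extracted feature

    @app.route("/", defaults={"path": ""}, methods=["GET", "POST"])
    @app.route("/<path:path>", methods=["GET", "POST"])
    def route(path):
        # Only /reviews/* has been migrated so far; every other path still
        # reaches the monolith, so there is no big-bang cutover.
        backend = REVIEWS_SERVICE if path.startswith("reviews") else MONOLITH
        upstream = requests.request(
            request.method, f"{backend}/{path}",
            params=request.args, data=request.get_data(), timeout=10,
        )
        return Response(upstream.content, status=upstream.status_code)

    if __name__ == "__main__":
        app.run(port=8080)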

Your existing application keeps user state information in a single MySQL database. This state information is very user-specific and depends heavily on how long a user has been using an application. The MySQL database is causing challenges to maintain and enhance the schema for various users.
Which storage option should you choose?

  • A. Cloud SQL
  • B. Cloud Storage
  • C. Cloud Spanner
  • D. Cloud Datastore/Firestore


Answer : D

Reference:
https://cloud.google.com/solutions/migrating-mysql-to-cloudsql-concept
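
A sketch of option D with the google-cloud-firestore client: per-user state is kept as a schemaless document, so new attributes can be added for some users without a schema migration. Collection, document, and field names are illustrative.

    from google.cloud import firestore

    db = firestore.Client()

    def save_user_state(user_id: str, fields: dict) -> None:
        # merge=True adds or updates only the supplied fields, so different
        # users can carry different attributes without schema migrations.
        db.collection("user_state").document(user_id).set(fields, merge=True)

    save_user_state("user-123", {"tenure_days": 412, "beta_features": ["dark_mode"]})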

You are building a new API. You want to minimize the cost of storing images and reduce the latency of serving them.
Which architecture should you use?

  • A. App Engine backed by Cloud Storage
  • B. Compute Engine backed by Persistent Disk
  • C. Transfer Appliance backed by Cloud Filestore
  • D. Cloud Content Delivery Network (CDN) backed by Cloud Storage


Answer : D
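
A sketch of the storage side of option D: images are uploaded to Cloud Storage with a Cache-Control header so that Cloud CDN (enabled on the backend bucket behind the HTTP(S) load balancer) can cache them at the edge. The bucket name, object name, and max-age value are placeholders.

    from google.cloud import storage

    client = storage.Client()
    bucket = client.bucket("my-image-bucket")  # placeholder bucket

    blob = bucket.blob("images/product-42.jpg")
    blob.cache_control = "public, max-age=86400"  # let Cloud CDN cache for a day
    blob.upload_from_filename("product-42.jpg", content_type="image/jpeg")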

Your company's development teams want to use Cloud Build in their projects to build and push Docker images to Container Registry. The operations team requires all Docker images to be published to a centralized, securely managed Docker registry that the operations team manages.
What should you do?

  • A. Use Container Registry to create a registry in each development team's project. Configure the Cloud Build build to push the Docker image to the project's registry. Grant the operations team access to each development team's registry.
  • B. Create a separate project for the operations team that has Container Registry configured. Assign appropriate permissions to the Cloud Build service account in each developer team's project to allow access to the operation team's registry.
  • C. Create a separate project for the operations team that has Container Registry configured. Create a Service Account for each development team and assign the appropriate permissions to allow it access to the operations team's registry. Store the service account key file in the source code repository and use it to authenticate against the operations team's registry.
  • D. Create a separate project for the operations team that has the open source Docker Registry deployed on a Compute Engine virtual machine instance. Create a username and password for each development team. Store the username and password in the source code repository and use it to authenticate against the operations team's Docker registry.


Answer : B

Reference:
https://cloud.google.com/container-registry/

You are planning to deploy your application in a Google Kubernetes Engine (GKE) cluster. Your application can scale horizontally, and each instance of your application needs to have a stable network identity and its own persistent disk.
Which GKE object should you use?

  • A. Deployment
  • B. StatefulSet
  • C. ReplicaSet
  • D. ReplicationController


Answer : B

Reference:
https://livebook.manning.com/book/kubernetes-in-action/chapter-10/46
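
A minimal StatefulSet sketch, expressed here as a Python dict dumped to YAML for kubectl. The headless Service named in serviceName gives each replica a stable network identity, and volumeClaimTemplates gives each replica its own persistent disk. Names, image, and sizes are placeholders.

    import yaml

    statefulset = {
        "apiVersion": "apps/v1",
        "kind": "StatefulSet",
        "metadata": {"name": "myapp"},
        "spec": {
            "serviceName": "myapp",   # headless Service -> stable per-pod DNS names
            "replicas": 3,
            "selector": {"matchLabels": {"app": "myapp"}},
            "template": {
                "metadata": {"labels": {"app": "myapp"}},
                "spec": {"containers": [{
                    "name": "myapp",
                    "image": "gcr.io/my-project/myapp:1.0",   # placeholder image
                    "volumeMounts": [{"name": "data", "mountPath": "/var/lib/myapp"}],
                }]},
            },
            "volumeClaimTemplates": [{   # one persistent disk per pod
                "metadata": {"name": "data"},
                "spec": {
                    "accessModes": ["ReadWriteOnce"],
                    "resources": {"requests": {"storage": "10Gi"}},
                },
            }],
        },
    }

    with open("statefulset.yaml", "w") as f:
        yaml.safe_dump(statefulset, f, sort_keys=False)
    # Apply with: kubectl apply -f statefulset.yaml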

You are using Cloud Build to build a Docker image. You need to modify the build to run unit and integration tests. When there is a failure, you want the build history to clearly display the stage at which the build failed.
What should you do?

  • A. Add RUN commands in the Dockerfile to execute unit and integration tests.
  • B. Create a Cloud Build build config file with a single build step to compile unit and integration tests.
  • C. Create a Cloud Build build config file that will spawn a separate cloud build pipeline for unit and integration tests.
  • D. Create a Cloud Build build config file with separate cloud builder steps to compile and execute unit and integration tests.


Answer : D
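
A sketch of option D, written as a Python dict and dumped to cloudbuild.yaml: unit tests, integration tests, and the image build run as separate steps, so a failed build shows which step failed in the build history. Builder images, test commands, and image names are assumptions.

    import yaml

    cloudbuild = {
        "steps": [
            {   # step 1: unit tests
                "id": "unit-tests",
                "name": "python:3.11",
                "entrypoint": "bash",
                "args": ["-c", "pip install -r requirements.txt && pytest tests/unit"],
            },
            {   # step 2: integration tests
                "id": "integration-tests",
                "name": "python:3.11",
                "entrypoint": "bash",
                "args": ["-c", "pip install -r requirements.txt && pytest tests/integration"],
            },
            {   # step 3: build the image only if the tests passed
                "id": "docker-build",
                "name": "gcr.io/cloud-builders/docker",
                "args": ["build", "-t", "gcr.io/$PROJECT_ID/myapp:$SHORT_SHA", "."],
            },
        ],
        "images": ["gcr.io/$PROJECT_ID/myapp:$SHORT_SHA"],
    }

    with open("cloudbuild.yaml", "w") as f:
        yaml.safe_dump(cloudbuild, f, sort_keys=False)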

Your code is running on Cloud Functions in project A. It is supposed to write an object in a Cloud Storage bucket owned by project B. However, the write call is failing with the error "403 Forbidden".
What should you do to correct the problem?

  • A. Grant your user account the roles/storage.objectCreator role for the Cloud Storage bucket.
  • B. Grant your user account the roles/iam.serviceAccountUser role for the [email protected] service account.
  • C. Grant the [email protected] service account the roles/storage.objectCreator role for the Cloud Storage bucket.
  • D. Enable the Cloud Storage API in project B.


Answer : C
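
A sketch of option C using the google-cloud-storage client: the bucket in project B grants roles/storage.objectCreator to project A's Cloud Functions runtime service account. The bucket name and the service account email below are hypothetical; use the account shown on the function's details page.

    from google.cloud import storage

    client = storage.Client(project="project-b")
    bucket = client.bucket("project-b-output-bucket")  # placeholder bucket

    policy = bucket.get_iam_policy(requested_policy_version=3)
    policy.bindings.append({
        "role": "roles/storage.objectCreator",
        # Hypothetical email; substitute the function's runtime service account
        # from project A.
        "members": {"serviceAccount:my-function-sa@project-a.iam.gserviceaccount.com"},
    })
    bucket.set_iam_policy(policy)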

Case study -
This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to complete each case. However, there may be additional case studies and sections on this exam. You must manage your time to ensure that you are able to complete all questions included on this exam in the time provided.
To answer the questions included in a case study, you will need to reference information that is provided in the case study. Case studies might contain exhibits and other resources that provide more information about the scenario that is described in the case study. Each question is independent of the other questions in this case study.
At the end of this case study, a review screen will appear. This screen allows you to review your answers and to make changes before you move to the next section of the exam. After you begin a new section, you cannot return to this section.

To start the case study -
To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore the content of the case study before you answer the questions. Clicking these buttons displays information such as business requirements, existing environment, and problem statements. If the case study has an
All Information tab, note that the information displayed is identical to the information displayed on the subsequent tabs. When you are ready to answer a question, click the Question button to return to the question.

Company Overview -
HipLocal is a community application designed to facilitate communication between people in close proximity. It is used for event planning and organizing sporting events, and for businesses to connect with their local communities. HipLocal launched recently in a few neighborhoods in Dallas and is rapidly growing into a global phenomenon. Its unique style of hyper-local community communication and business outreach is in demand around the world.

Executive Statement -
We are the number one local community app; it's time to take our local community services global. Our venture capital investors want to see rapid growth and the same great experience for new local and virtual communities that come online, whether their members are 10 or 10000 miles away from each other.

Solution Concept -
HipLocal wants to expand their existing service, with updated functionality, in new regions to better serve their global customers. They want to hire and train a new team to support these regions in their time zones. They will need to ensure that the application scales smoothly and provides clear uptime data.

Existing Technical Environment -
HipLocal's environment is a mix of on-premises hardware and infrastructure running in Google Cloud Platform. The HipLocal team understands their application well, but has limited experience in global scale applications. Their existing technical environment is as follows:
* Existing APIs run on Compute Engine virtual machine instances hosted in GCP.
* State is stored in a single instance MySQL database in GCP.
* Data is exported to an on-premises Teradata/Vertica data warehouse.
* Data analytics is performed in an on-premises Hadoop environment.
* The application has no logging.
* There are basic indicators of uptime; alerts are frequently fired when the APIs are unresponsive.

Business Requirements -
HipLocal's investors want to expand their footprint and support the increase in demand they are seeing. Their requirements are:
* Expand availability of the application to new regions.
* Increase the number of concurrent users that can be supported.
* Ensure a consistent experience for users when they travel to different regions.
* Obtain user activity metrics to better understand how to monetize their product.
* Ensure compliance with regulations in the new regions (for example, GDPR).
* Reduce infrastructure management time and cost.
* Adopt the Google-recommended practices for cloud computing.

Technical Requirements -
* The application and backend must provide usage metrics and monitoring.
* APIs require strong authentication and authorization.
* Logging must be increased, and data should be stored in a cloud analytics platform.
* Move to serverless architecture to facilitate elastic scaling.
* Provide authorized access to internal apps in a secure manner.

HipLocal's .NET-based auth service fails under intermittent load.
What should they do?

  • A. Use App Engine for autoscaling.
  • B. Use Cloud Functions for autoscaling.
  • C. Use a Compute Engine cluster for the service.
  • D. Use a dedicated Compute Engine virtual machine instance for the service.


Answer : A

Reference:
https://www.qwiklabs.com/focuses/611?parent=catalog

Case study -
This question uses the same HipLocal case study described above; the company overview, executive statement, solution concept, existing technical environment, business requirements, and technical requirements are unchanged.

HipLocal's APIs are having occasional application failures. They want to collect application information specifically to troubleshoot the issue. What should they do?

  • A. Take frequent snapshots of the virtual machines.
  • B. Install the Cloud Logging agent on the virtual machines.
  • C. Install the Cloud Monitoring agent on the virtual machines.
  • D. Use Cloud Trace to look for performance bottlenecks.


Answer : B
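
Alongside the Logging agent on the VMs, the application itself can send its own entries to Cloud Logging; a minimal sketch with the google-cloud-logging client follows (illustrated here in Python, though HipLocal's services may use other runtimes). The log messages are placeholders.

    import logging

    import google.cloud.logging

    client = google.cloud.logging.Client()
    client.setup_logging()  # route standard-library logging to Cloud Logging

    logging.info("auth request accepted user=u-123")
    logging.error("auth request failed reason=timeout")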

Case study -
This question uses the same HipLocal case study described above; the company overview, executive statement, solution concept, existing technical environment, business requirements, and technical requirements are unchanged.

HipLocal has connected their Hadoop infrastructure to GCP using Cloud Interconnect in order to query data stored on persistent disks.
Which IP strategy should they use?

  • A. Create manual subnets.
  • B. Create an auto mode subnet.
  • C. Create multiple peered VPCs.
  • D. Provision a single instance for NAT.


Answer : A
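
A sketch of option A with the Compute Engine API Python client: a custom-mode VPC with hand-picked subnet ranges chosen not to overlap the on-premises Hadoop network reached over Cloud Interconnect. Project, region, names, and CIDR ranges are placeholders, and in practice you would wait for each insert operation to finish before the next call.

    import googleapiclient.discovery

    compute = googleapiclient.discovery.build("compute", "v1")
    PROJECT = "my-project"  # placeholder

    # Custom-mode network: no auto-created subnets, so every range is chosen
    # deliberately and cannot collide with on-premises address space.
    compute.networks().insert(
        project=PROJECT,
        body={"name": "hiplocal-vpc", "autoCreateSubnetworks": False},
    ).execute()
    # (Poll the returned operation until the network exists before continuing.)

    compute.subnetworks().insert(
        project=PROJECT,
        region="us-central1",
        body={
            "name": "hiplocal-us-central1",
            "network": f"projects/{PROJECT}/global/networks/hiplocal-vpc",
            "ipCidrRange": "10.10.0.0/20",  # chosen to avoid on-premises overlap
        },
    ).execute()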
